
    Flow-Aware Elephant Flow Detection for Software-Defined Networks

    Software-defined networking (SDN) separates the network control plane from the packet forwarding plane, which provides comprehensive network-state visibility for better network management and resilience. Traffic classification, particularly elephant flow detection, can lead to improved flow control and resource provisioning in SDN networks. Existing elephant flow detection techniques use pre-set thresholds that cannot adapt to changes in traffic concept and distribution. This paper proposes a flow-aware elephant flow detection technique for SDN. The proposed technique employs two classifiers, one on the SDN switches and one on the controller, to achieve accurate elephant flow detection efficiently. Moreover, this technique shares the elephant flow classification tasks between the controller and switches. Hence, most mice flows can be filtered in the switches, avoiding the need to send large numbers of classification requests and signaling messages to the controller. Experimental findings reveal that the proposed technique outperforms contemporary methods in terms of running time, accuracy, F-measure, and recall.
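
    A minimal sketch of the two-tier detection idea, assuming a simple byte-rate threshold as the switch-side filter and an arbitrary pre-trained model on the controller; both are placeholders, since the abstract does not specify the classifiers:

        from dataclasses import dataclass

        @dataclass
        class FlowStats:
            flow_id: str
            byte_count: int      # bytes observed so far
            duration_s: float    # flow age in seconds

        def switch_side_filter(flow, byte_rate_threshold=1e5):
            # Lightweight switch-side check: only candidate elephants are
            # forwarded to the controller, so most mice flows are filtered here.
            rate = flow.byte_count / max(flow.duration_s, 1e-6)
            return rate >= byte_rate_threshold

        def controller_side_classify(flow, model):
            # Heavier controller-side classification for surviving candidates;
            # `model` is any pre-trained classifier with a predict() method.
            return bool(model.predict([[flow.byte_count, flow.duration_s]])[0])

        # Only flows passing switch_side_filter() trigger a controller request,
        # which is what keeps the signaling volume low.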

    Impact of Packet Inter-arrival Time Features for Online Peer-to-Peer (P2P) Classification

    Identification of bandwidth-heavy Internet traffic is important for network administrators who need to throttle high-bandwidth application traffic. Flow-feature-based classification has previously been proposed as a promising method to identify Internet traffic based on packet statistical features. The selection of statistical features plays an important role in accurate and timely classification. In this work, we investigate the impact of the packet inter-arrival time (IAT) feature for online P2P classification in terms of accuracy, Kappa statistic, and time. Simulations were conducted using available traces from the University of Brescia, the University of Aalborg, and the University of Cambridge. Experimental results show that including IAT as an online feature increases simulation time and decreases both classification accuracy and the Kappa statistic.
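
    For concreteness, a small sketch of how per-flow IAT statistics of the kind studied here can be computed from packet timestamps; the exact feature set used in the experiments may differ:

        import numpy as np

        def iat_features(timestamps):
            # Inter-arrival time (IAT) statistics over the first packets of a flow.
            t = np.sort(np.asarray(timestamps, dtype=float))
            iat = np.diff(t)  # gaps between consecutive packets
            return {"iat_min": iat.min(), "iat_max": iat.max(),
                    "iat_mean": iat.mean(), "iat_std": iat.std()}

        print(iat_features([0.000, 0.012, 0.015, 0.070]))
        # {'iat_min': 0.003, 'iat_max': 0.055, 'iat_mean': ~0.0233, ...}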

    Software-Defined Networks for Resource Allocation in Cloud Computing: A Survey

    Cloud computing has a shared set of resources, including physical servers, networks, storage, and user applications. Resource allocation is a critical issue in cloud computing, especially in Infrastructure-as-a-Service (IaaS). The decision-making process in the cloud computing network is non-trivial, as it is handled by switches and routers. Moreover, network concept drift resulting from changing user demands is among the problems affecting cloud computing. The cloud data center needs agile and elastic network control functions, coordinated with control of computing resources, to ensure proper virtual machine (VM) operations, traffic performance, and energy conservation. Software-Defined Networking (SDN) offers new opportunities to redesign resource management so that cloud service allocation can be handled while the traffic requirements of running VMs are updated dynamically. Including an SDN to manage the infrastructure of a cloud data center better empowers cloud computing and makes it easier to allocate resources. In this survey, we discuss resource allocation in cloud computing based on SDN. We note that various related studies do not satisfy all the identified requirements. This study is intended to enhance resource allocation mechanisms that span both the cloud computing and SDN domains. Consequently, we analyze the resource allocation mechanisms used by various researchers and categorize and evaluate them based on the measured parameters and the problems addressed. This survey also contributes to a better understanding of the core of current research, allowing researchers to obtain further information about possible cloud computing strategies relevant to IaaS resource allocation.
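
    As a point of reference for the mechanisms surveyed, the sketch below implements first-fit, one of the simplest VM placement heuristics; it is illustrative only and not drawn from any particular surveyed work:

        def first_fit(vm_demand, hosts):
            # Place a VM with (cpu, mem) demand on the first host with spare capacity.
            for host in hosts:
                if (host["free_cpu"] >= vm_demand["cpu"]
                        and host["free_mem"] >= vm_demand["mem"]):
                    host["free_cpu"] -= vm_demand["cpu"]
                    host["free_mem"] -= vm_demand["mem"]
                    return host["name"]
            return None  # no capacity: queue the request or scale out

        hosts = [{"name": "h1", "free_cpu": 8, "free_mem": 32},
                 {"name": "h2", "free_cpu": 16, "free_mem": 64}]
        print(first_fit({"cpu": 12, "mem": 48}, hosts))  # -> "h2"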

    Critical success factors for ERP systems’ post-implementations of SMEs in Saudi Arabia: a top management and vendors’ perspective

    Although numerous case studies have determined the critical success factors (CSFs) for enterprise resource planning (ERP) during the adoption and implementation stages, empirical investigations of CSFs for ERP in the post-implementation stage (after going live) are scarce. As such, this study examined the influence of top management support and vendor support as CSFs in the post-implementation stage of ERP systems in small and medium enterprises (SMEs) in the Kingdom of Saudi Arabia (KSA). A total of 177 end-users of ERP systems from two manufacturing organizations in KSA that had implemented on-premises ERP systems participated in this study. Data gathered from structured questionnaires were analyzed using the SmartPLS3 and SPSS software packages, and regression analysis was performed to assess the correlations among the variables. Of the seven CSFs identified from the literature, top management support had a significant impact on user training, competency of the internal Information Technology (IT) department, and effective communication between departments, but an insignificant impact on continuous vendor support. Meanwhile, continuous vendor support had a significant influence on continuous integration of the system but an insignificant influence on user interfaces and custom code. The study outcomes may serve as practical guidance for effective ERP post-implementation. According to the proposed research model, ERP post-implementation success in KSA was significantly influenced by top management support, whereas continuous vendor support displayed a substantial impact on the continuous integration of ERP systems.
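
    A hedged sketch of the kind of regression analysis described, using synthetic data and statsmodels in place of SmartPLS3/SPSS; the variable names and effect sizes are illustrative assumptions:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 177  # matches the study's sample size; responses are simulated
        top_mgmt = rng.normal(size=n)   # top management support (standardized scores)
        vendor = rng.normal(size=n)     # continuous vendor support
        user_training = 0.6 * top_mgmt + rng.normal(scale=0.8, size=n)

        X = sm.add_constant(np.column_stack([top_mgmt, vendor]))
        fit = sm.OLS(user_training, X).fit()
        print(fit.pvalues)  # a small p-value on top_mgmt would correspond to
                            # "significant impact on user training" in the study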

    Prioritising Organisational Factors Impacting Cloud ERP Adoption and the Critical Issues Related to Security, Usability, and Vendors: A Systematic Literature Review

    Cloud ERP is a type of enterprise resource planning (ERP) system that runs on the vendor's cloud platform instead of an on-premises network, enabling companies to connect through the Internet. The goal of this study was to rank and prioritise the factors driving cloud ERP adoption by organisations and to identify the critical issues related to security, usability, and vendors that impact adoption of cloud ERP systems. The assessment of critical success factors (CSFs) in on-premises ERP adoption and implementation has been well documented; however, no previous research has been carried out on CSFs in cloud ERP adoption. The contribution of this research is therefore the identification and analysis of 16 CSFs through a systematic literature review in which 73 publications on cloud ERP adoption, drawn from a range of conferences and journals, were assessed using inclusion and exclusion criteria. Drawing from the literature, we found security, usability, and vendors to be the three most widely cited critical issues for the adoption of cloud-based ERP; hence, the second contribution of this study is an integrative model constructed from 12 drivers based on the security, usability, and vendor characteristics that may have the greatest influence on those top critical issues in the adoption of cloud ERP systems. We also identified critical gaps in current research, such as the inconclusiveness of findings on security, usability, and vendor critical issues, by highlighting the most important drivers influencing those issues in cloud ERP adoption and the lack of discussion of the nature of the criticality of those CSFs. This research will aid in the development of new strategies, or the revision of existing strategies and policies, aimed at effectively integrating cloud ERP into cloud computing infrastructure. It will also allow cloud ERP suppliers to determine organisations' and business owners' expectations and implement appropriate tactics. A better understanding of the CSFs will narrow the field of failure and assist practitioners and managers in increasing their chances of success.
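
    A toy sketch of the prioritisation step, ranking critical issues by how often they are cited across reviewed papers; the paper sets below are hypothetical, not the 73 reviewed publications:

        from collections import Counter

        papers = [  # each set lists the critical issues a (hypothetical) paper cites
            {"security", "usability", "vendor"},
            {"security", "cost"},
            {"usability", "vendor", "security"},
        ]
        ranking = Counter(issue for p in papers for issue in p).most_common()
        print(ranking)
        # e.g. [('security', 3), ('usability', 2), ('vendor', 2), ('cost', 1)]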

    Cyber-Attack Prediction Based on Network Intrusion Detection Systems for Alert Correlation Techniques: A Survey

    Network Intrusion Detection Systems (NIDS) are designed to safeguard enterprise networks against cyber-attacks. However, NIDSs suffer from several limitations, such as generating a high volume of low-quality alerts; moreover, 99% of the alerts produced by NIDSs are false positives. Predicting the future actions of an attacker is also one of the most important goals in this area. This study reviews the state of the art in cyber-attack prediction based on NIDS intrusion alerts, its models, and its limitations. A taxonomy of intrusion alert correlation (AC) is introduced, comprising similarity-based, statistical-based, knowledge-based, and hybrid approaches, along with a classification of alert correlation components. Alert correlation datasets and future research directions are also highlighted. AC receives raw alerts, identifies associations between different alerts, links each alert to its related contextual information, and predicts a forthcoming alert/attack, providing a timely, concise, and high-level view of the network security situation. This review can serve as a benchmark for researchers and industry for the future progress and development of Network Intrusion Detection Systems.
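
    A minimal correlation step in the spirit of the taxonomy's similarity-based category; the attribute set and the 0.5 threshold are illustrative assumptions:

        def alert_similarity(a, b, keys=("src_ip", "dst_ip", "dst_port", "signature")):
            # Fraction of shared attributes between two NIDS alerts; real AC
            # systems weight attributes and apply time windows.
            return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

        a1 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
              "dst_port": 22, "signature": "ssh-brute"}
        a2 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
              "dst_port": 80, "signature": "http-scan"}
        if alert_similarity(a1, a2) >= 0.5:
            # same attacker/victim pair: merge into one attack scenario and
            # use the scenario history to predict the next likely step
            print("correlate into one attack scenario")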

    Dynamic Learning Framework for Smooth-Aided Machine-Learning-Based Backbone Traffic Forecasts

    Recently, there has been an increasing need for new applications and services, such as big data, blockchains, vehicle-to-everything (V2X), the Internet of Things, 5G, and beyond. To maintain quality of service (QoS), accurate network resource planning and forecasting are therefore essential steps in resource allocation. This study proposes a reliable hybrid dynamic bandwidth-slice forecasting framework that combines the long short-term memory (LSTM) neural network with local smoothing methods to improve the network forecasting model. Moreover, the proposed framework can react dynamically to all changes occurring in the data series. Backbone traffic was used to validate the proposed method, and forecasting accuracy improved significantly under the proposed framework with minimal data loss from the smoothing process. The results showed that the hybrid moving-average LSTM (MLSTM) achieved the largest improvement in the training and testing forecasts, with 28% and 24% for the long-term evolution (LTE) time series and 35% and 32% for the multiprotocol label switching (MPLS) time series, respectively, while robust locally weighted scatterplot smoothing with LSTM (RLWLSTM) achieved the largest improvement for upstream traffic, at 45%; moreover, the dynamic learning framework achieved improvement percentages of up to 100%.
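
    As an illustration of the hybrid idea, the sketch below smooths a synthetic series with a moving average and then trains a small Keras LSTM on sliding windows of the smoothed data; the window size, layer width, and epoch count are placeholder values, not the paper's settings:

        import numpy as np
        import tensorflow as tf

        def moving_average(x, w=5):
            # local smoothing: simple trailing-window mean
            return np.convolve(x, np.ones(w) / w, mode="valid")

        rng = np.random.default_rng(0)
        raw = np.sin(np.linspace(0, 60, 600)) + rng.normal(0, 0.1, 600)  # stand-in traffic
        series = moving_average(raw)

        win = 24  # look-back window (placeholder)
        X = np.stack([series[i:i + win] for i in range(len(series) - win)])[..., None]
        y = series[win:]

        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(32, input_shape=(win, 1)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=5, verbose=0)
        print(model.predict(X[-1:], verbose=0))  # one-step-ahead forecast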

    Integration of Hybrid Networks, AI, Ultra Massive MIMO, THz Frequency, and FBMC Modulation Toward 6G Requirements: A Review

    The fifth-generation (5G) wireless communications have been deployed in many countries with the following features: a peak data rate of 20 Gbps, a latency of 1 ms, reliability of 99.999%, maximum mobility of 500 km/h, a bandwidth of 1 GHz, and a capacity of up to 10⁶ Mbps/m². Nonetheless, the rapid growth of applications such as extended/virtual reality (XR/VR), online gaming, telemedicine, cloud computing, smart cities, the Internet of Everything (IoE), and others demands lower latency, higher data rates, ubiquitous coverage, and better reliability. These higher requirements are the main problems that have challenged 5G while simultaneously encouraging researchers and practitioners to introduce viable solutions. This review paper discusses how sixth-generation (6G) technology could overcome the limitations of 5G, achieve the higher requirements, and support future applications. The integration of multiple access techniques, terahertz (THz) communication, visible light communications (VLC), ultra-massive multiple-input multiple-output (µm-MIMO), hybrid networks, cell-free massive MIMO, and artificial intelligence (AI)/machine learning (ML) has been proposed for 6G. The main contributions of this paper are a comprehensive review of the 6G vision, key performance indicators (KPIs), and the advanced potential technologies proposed, together with their operating principles. In addition, this paper reviews multiple access and modulation techniques, concentrating on Filter-Bank Multicarrier (FBMC) modulation as a potential technology for 6G. The paper ends by discussing potential applications, along with challenges and lessons identified from prior studies, to pave the path for future research.
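
    To make the FBMC idea concrete, the toy sketch below synthesizes a DFT-modulated filter bank in which every subcarrier is pulse-shaped by a prototype filter (here a Hann window, an assumption chosen for brevity) rather than OFDM's implicit rectangular pulse; the OQAM staggering and PHYDYAS prototype used in real FBMC systems are omitted:

        import numpy as np

        M = 8                          # subcarriers
        symbols = 16                   # multicarrier symbols per subcarrier
        proto = np.hanning(4 * M)      # toy prototype filter of length 4M
        data = (np.random.randint(0, 2, (M, symbols)) * 2 - 1).astype(complex)  # BPSK

        sig = np.zeros(symbols * M + len(proto), dtype=complex)
        for k in range(M):             # one filtered branch per subcarrier
            up = np.zeros(symbols * M, dtype=complex)
            up[::M] = data[k]                      # upsample by M
            branch = np.convolve(up, proto)        # pulse-shape with the prototype
            n = np.arange(len(branch))
            sig[:len(branch)] += branch * np.exp(2j * np.pi * k * n / M)  # shift to subcarrier k

    The prototype filter is what gives FBMC its low out-of-band leakage relative to OFDM, at the cost of longer filter tails.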

    Resource Exhaustion Attack Detection Scheme for WLAN Using Artificial Neural Network

    IEEE 802.11 Wi-Fi networks are prone to many denial-of-service (DoS) attacks due to vulnerabilities at the medium access control (MAC) layer of the 802.11 protocol. Because the wireless local area network (WLAN) transmits data over radio waves, its communication is exposed to attacks by illegitimate users. Moreover, the security design of the wireless infrastructure is vulnerable to versatile attacks: for example, an attacker can imitate genuine features, rendering classification-based methods inaccurate in differentiating between real and false messages. Although many security standards have been proposed over the last decades to counter wireless network attacks, effectively detecting such attacks remains crucial in today's real-world applications. This paper presents a novel resource exhaustion attack detection scheme (READS) to detect resource exhaustion attacks effectively. The proposed scheme can differentiate between genuine and fake management frames in the early stages of an attack, so that access points can effectively mitigate its consequences. The scheme learns from clustered samples using artificial neural networks to identify genuine and rogue resource-exhaustion management frames effectively and efficiently in the WLAN. The proposed scheme consists of four modules, which make it capable of alleviating the attack impact more effectively than related work. The experimental results show the effectiveness of the proposed technique, with an 89.11% improvement in detection compared to existing works.
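
    A hedged sketch of the core learning step, using a small scikit-learn multilayer perceptron on synthetic frame features; the feature set and READS's four modules are assumptions, not reproductions of the paper:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        # toy per-frame features: [inter-frame gap (s), sequence-number delta, RSSI (dBm)]
        genuine = rng.normal([0.10, 1.0, -45], [0.02, 0.3, 4], size=(200, 3))
        flood = rng.normal([0.01, 9.0, -60], [0.01, 3.0, 8], size=(200, 3))
        X = np.vstack([genuine, flood])
        y = np.array([0] * 200 + [1] * 200)  # 1 = rogue resource-exhaustion frame

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
        print(clf.predict([[0.008, 11.0, -63]]))  # likely flagged as rogue -> [1]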

    Forest Fire Detection Using New Routing Protocol

    The Mobile Ad-Hoc Network (MANET) has received significant interest from researchers for several applications. Despite the numerous routing protocols developed and proposed for MANETs, many remain inefficient in terms of data delivery and energy consumption, which limits the lifetime of a network used for forest fire monitoring. Therefore, this paper presents a development of the Location-Aided Routing (LAR) protocol for forest fire detection. The new routing protocol, named the LAR-Based Reliable Routing Protocol (LARRR), detects a forest fire based on three criteria: the route length between nodes, the sensed temperature, and the number of packets within node buffers (i.e., route busyness). The performance of the LARRR protocol is evaluated using widely known metrics: Packet Delivery Ratio (PDR), Energy Consumption (EC), End-to-End Delay (E2E Delay), and Routing Overhead (RO). The simulation results show that the proposed LARRR protocol achieves a PDR of 70%, an EC of 403 joules, an E2E delay of 2.733 s, and an RO of 43.04. The proposed LARRR protocol thus outperforms its competitors and is able to detect forest fires efficiently.
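
    To illustrate how the three criteria could be combined, the sketch below scores candidate routes with a weighted cost; the weights and the 40 °C pivot are assumptions, not LARRR's calibrated values:

        def route_cost(hops, temperature_c, buffered_pkts,
                       w_len=1.0, w_temp=0.5, w_busy=0.2):
            # Lower cost = preferred route: long, hot, and congested routes
            # score worse, combining route length, sensed temperature, and
            # buffer occupancy (route busyness).
            return (w_len * hops
                    + w_temp * max(temperature_c - 40.0, 0.0)
                    + w_busy * buffered_pkts)

        candidates = {"A": (4, 35.0, 2),    # (hops, temperature, buffered packets)
                      "B": (3, 55.0, 10)}
        best = min(candidates, key=lambda n: route_cost(*candidates[n]))
        print(best)  # -> "A": wins despite one extra hop, since B is hot and congested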